Many commodity crops have growth stages during which they are particularly vulnerable to stress-induced yield loss. In-season crop progress information is useful for quantifying crop risk, and satellite remote sensing (RS) can be used to track crop progress at regional scales. At present, all existing RS-based crop progress estimation (CPE) methods that target crop-specific stages rely on ground truth data for training and calibration. This reliance on ground survey data confines CPE methods to surveyed regions, limiting their utility. In this study, a new method is developed for conducting RS-based in-season CPE in unsurveyed regions by combining data from surveyed regions with synthetic crop progress data generated for an unsurveyed region. Corn-growing zones in Argentina were used as surrogate 'unsurveyed' regions. Existing weather generation, crop growth, and optical radiative transfer models were linked to produce synthetic weather, crop progress, and canopy reflectance data. A neural network (NN) method based on a bi-directional Long Short-Term Memory architecture was trained separately on surveyed data, synthetic data, and two different combinations of surveyed and synthetic data. A stopping criterion was developed that uses the weighted divergence of the surveyed and synthetic data validation losses. Net F1 scores across all crop progress stages increased by 8.7% when the NN was trained on a combination of surveyed-region and synthetic data, and overall performance was only 21% lower than when the NN was trained on surveyed data and applied in the US Midwest. The performance gain from synthetic data was greatest in zones with dual planting windows, while the inclusion of surveyed-region data from the US Midwest helped mitigate NN sensitivity to noise in NDVI data. Overall, the results suggest that in-season CPE in other unsurveyed regions may be possible with an increased quantity and variety of synthetic crop progress data.
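The stopping criterion above combines validation losses from the surveyed and synthetic data streams. A minimal sketch of one plausible formulation follows; the weighting scheme, patience logic, and function name `should_stop` are illustrative assumptions, not the abstract's exact rule.

```python
# Hypothetical early-stopping criterion that weights the validation losses
# measured on surveyed and synthetic data. Weights and threshold are
# illustrative assumptions, not the authors' exact formulation.

def should_stop(surveyed_losses, synthetic_losses,
                w_surveyed=0.7, w_synthetic=0.3,
                patience=3, tol=1e-3):
    """Stop when the weighted validation loss has not improved on the
    best earlier value by more than `tol` for `patience` epochs."""
    combined = [w_surveyed * s + w_synthetic * y
                for s, y in zip(surveyed_losses, synthetic_losses)]
    if len(combined) <= patience:
        return False
    best_before = min(combined[:-patience])
    recent = combined[-patience:]
    return all(loss > best_before - tol for loss in recent)
```

In practice such a rule would be evaluated once per epoch on held-out surveyed and synthetic validation sets.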
Text-to-image generation methods produce high-resolution, high-quality images, but these methods should not produce immoral images that contain content deemed inappropriate from a commonsense morality perspective. Conventional approaches often neglect these ethical concerns, and existing solutions are limited in their ability to avoid immoral image generation. In this paper, we aim to automatically judge the immorality of synthesized images and manipulate these images into moral alternatives. To this end, we build a model with three main capabilities: (1) it recognizes the visual commonsense immorality of a given image, (2) it localizes and highlights the immoral visual (and textual) attributes that make the image immoral, and (3) it manipulates a given immoral image into a morally-qualifying alternative. We experiment with the state-of-the-art Stable Diffusion text-to-image generation model and show the effectiveness of our ethical image manipulation. Our human study confirms that our model is indeed able to generate morally-satisfying images from immoral ones. Our implementation will be made publicly available upon publication for wide use as a new safety checker for text-to-image generation models.
One of the important topics in the study of Chinese classical poetry is the analysis of poetic style. By examining relevant works from previous dynasties, researchers have judged poetic style mostly by subjective impression, referring to earlier evaluations that have become established conclusions. Although this method of judgment is often effective, it can be error-prone. This paper builds the most comprehensive dataset of Chinese classical poetry to date, trains a pre-trained BART-poem model on this dataset, and proposes a generally applicable poetic style judgment method based on this BART-poem model, innovatively introducing deep learning into the field of computational stylistics and providing a new research method for the study of classical poetry. We apply this method to the problem of poetic style identification in the Tang and Song dynasties, taking as research objects poetry schools considered to have a relatively clear and consistent poetic style, such as the Hongzheng Qizi and Jiajing Qizi, the Jiangxi poetry school, and the Tongguang poetry school, and testing on the poems of their representative poets. Experiments show that the model's judgments of the tested poems are largely consistent with the conclusions of critics from previous dynasties, corroborate some avant-garde judgments of Mr. Qian Zhongshu, and effectively solve the task of poetic style recognition in the Tang and Song dynasties.
Coherent microscopy techniques provide an unparalleled multi-scale view of materials across scientific and technological fields, from structural materials to quantum devices, from integrated circuits to biological cells. Driven by the construction of brighter sources and high-speed detectors, coherent X-ray microscopy methods such as ptychography promise to revolutionize nanoscale materials characterization. However, the associated significant increase in data and compute demands means that conventional approaches are no longer sufficient for recovering sample images in real time from high-speed coherent imaging experiments. Here, we demonstrate a workflow that leverages artificial intelligence at the edge and high-performance computing to enable real-time inversion of X-ray ptychography data streamed directly from a detector. The proposed AI-enabled workflow eliminates the sampling constraints imposed by traditional ptychography, thereby allowing low-dose imaging using orders of magnitude less data than required by conventional methods.
Recent pre-trained language models (PLMs) have achieved great success on many natural language processing tasks by learning linguistic features and contextualized sentence representations. Since the properties captured in the stacked layers of a PLM are not clearly identified, straightforward approaches that embed the last layer are often preferred over deriving sentence representations from all of the PLM's layers. This paper introduces an attention-based pooling strategy that enables the model to preserve the layer-wise signals captured in each layer and to learn digested linguistic features for downstream tasks. A contrastive learning objective can train the layer-wise attention pooling in both unsupervised and supervised manners, and it makes the anisotropic space of pre-trained embeddings more isotropic and uniform. We evaluate our model on standard semantic textual similarity (STS) and semantic search tasks. As a result, our method improves the performance of contrastive-learning-based BERT_BASE baselines and variants.
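The layer-wise attention pooling described above can be sketched as a softmax-weighted combination of per-layer sentence embeddings. This is a minimal illustration with random stand-ins, not the paper's trained model; `attention_pool` and its attention vector are hypothetical names, and in practice the attention parameters would be learned with the contrastive objective.

```python
import numpy as np

# Minimal sketch of attention-based pooling over the stacked layers of a
# pre-trained LM. Layer embeddings and the attention vector are random
# stand-ins; real ones come from the PLM and from training.

def attention_pool(layer_embs, attn_vec):
    """layer_embs: (num_layers, hidden); attn_vec: (hidden,).
    Returns one sentence vector as a softmax-weighted sum of layers."""
    scores = layer_embs @ attn_vec               # (num_layers,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over layers
    return weights @ layer_embs                  # (hidden,)

rng = np.random.default_rng(0)
layers = rng.normal(size=(13, 8))  # e.g. 12 transformer layers + embeddings
query = rng.normal(size=8)
sentence_vec = attention_pool(layers, query)
```

Because the weights are non-negative and sum to one, the pooled vector is a convex combination of the per-layer representations.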
Data sparsity is a well-known problem in grammatical error correction (GEC). Generating synthetic training data is one widely proposed solution to this problem and has allowed models to achieve state-of-the-art (SOTA) performance in recent years. However, these methods often generate unrealistic errors, or aim to generate sentences with only one error each. We propose a learning-based two-stage method for synthetic data generation for GEC that relaxes the constraint that each sentence contain only one error. Errors are generated according to sentence merit. We show that GEC models trained on our synthetically generated corpora outperform models trained on synthetic data from prior work.
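For contrast with the learned approach above, a toy rule-based error injector can be sketched in a few lines. This is a deliberately naive baseline of the kind the abstract criticizes as unrealistic, not the paper's two-stage method; `inject_errors` and its operations are illustrative.

```python
import random

# Toy rule-based error injection for synthetic GEC data: corrupt a clean
# token sequence with a configurable number of simple edits (drop a token,
# swap adjacent tokens, duplicate a token). Unlike single-error generators,
# n_errors edits may be injected per sentence.

def inject_errors(tokens, n_errors=2, seed=0):
    rng = random.Random(seed)
    corrupted = list(tokens)
    ops = ["drop", "swap", "dup"]
    for _ in range(n_errors):
        if len(corrupted) < 2:
            break
        op = rng.choice(ops)
        i = rng.randrange(len(corrupted) - 1)
        if op == "drop":
            del corrupted[i]
        elif op == "swap":
            corrupted[i], corrupted[i + 1] = corrupted[i + 1], corrupted[i]
        else:
            corrupted.insert(i, corrupted[i])
    return corrupted

clean = "the cat sat on the mat".split()
noisy = inject_errors(clean, n_errors=2, seed=0)
```

A learned generator would instead condition the number and type of edits on the sentence, which is what motivates the two-stage design.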
With advances in computer vision and NLP, vision-language (VL) is becoming an important area of research. Despite its importance, the evaluation metrics for this research field are still at an early stage of development. In this paper, we propose a quantitative metric, the 'symbol score', and an evaluation dataset, 'Human Puzzle', to evaluate whether VL models understand images as humans do. We observe that VL models do not interpret the overall context of an input image, but instead show a bias toward the particular objects or shapes that form local context. We aim to quantitatively measure how well models perform at understanding context. To verify the capability of current VL models, we cut an original input image into pieces and place them at random, distorting the image's global context. Our paper discusses each VL model's level of interpretation of global context and addresses how structural features influence the results.
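The patch-shuffling procedure above, which destroys global context while preserving local content, can be sketched directly. This is an illustrative reconstruction of the described operation, not the authors' code; `shuffle_patches` is a hypothetical name.

```python
import numpy as np

# Split a (H, W) image into patch x patch tiles and rearrange them at
# random: local content (each tile) survives, global context does not.

def shuffle_patches(img, patch, seed=0):
    h, w = img.shape
    tiles = [img[r:r + patch, c:c + patch]
             for r in range(0, h, patch) for c in range(0, w, patch)]
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(tiles))
    cols = w // patch
    out = np.empty_like(img)
    for idx, t in enumerate(order):
        r, c = divmod(idx, cols)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = tiles[t]
    return out

img = np.arange(16).reshape(4, 4)
shuffled = shuffle_patches(img, patch=2, seed=0)
```

Comparing a model's predictions on `img` and `shuffled` then isolates how much it relies on global structure rather than local cues.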
We propose a novel method to automatically contour the left ventricle in 2D echocardiographic images. Unlike most existing segmentation methods, which are based on predicting segmentation masks, we focus instead on predicting the endocardial contour and key landmark points within it (the base and the apex). This provides a representation closer to how experts perform manual annotation, and therefore produces physiologically more plausible results. Our proposed method uses a two-headed network based on the U-Net architecture: one head predicts 7 contour points, and the other predicts a distance map of the contour. This approach was compared against U-Net and against a point-based approach, achieving performance gains of up to 30% in landmark localization (<4.5 mm) and in distance to the ground-truth contour (<3.5 mm).
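The distance-map head above regresses, for each pixel, the distance to the contour. A minimal sketch of how such a regression target could be built from contour points follows; this is an illustrative brute-force construction on a tiny grid, not the paper's implementation, and `distance_map` is a hypothetical name.

```python
import numpy as np

# Build a distance-map target from contour points: each pixel's value is
# the Euclidean distance to the nearest contour point. Brute force over a
# small grid for clarity; real pipelines would use a distance transform.

def distance_map(contour_pts, shape):
    ys, xs = np.mgrid[:shape[0], :shape[1]]
    grid = np.stack([ys, xs], axis=-1).astype(float)      # (H, W, 2)
    pts = np.asarray(contour_pts, dtype=float)            # (N, 2)
    diffs = grid[:, :, None, :] - pts[None, None, :, :]   # (H, W, N, 2)
    return np.linalg.norm(diffs, axis=-1).min(axis=-1)    # (H, W)

dmap = distance_map([(2, 2), (5, 7)], (8, 8))
```

The network's second head would be trained to regress such a map, giving dense supervision that complements the sparse 7-point head.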
Accurate uncertainty estimation is a critical need in the medical imaging community. A variety of methods have been proposed, all direct extensions of classification uncertainty estimation techniques. Pixel-wise independent uncertainty estimates, often based on the probabilistic interpretation of neural networks, do not take prior anatomical knowledge into account and therefore provide sub-optimal results for many segmentation tasks. We therefore propose CRISP, an uncertainty prediction method for image segmentation. At its core, CRISP implements a contrastive approach to learn a joint latent space that encodes the distribution of valid segmentations and their corresponding images. We use this joint latent space to compare predictions against thousands of latent vectors and provide anatomically consistent uncertainty maps. A comprehensive study conducted on four medical image databases involving different modalities and organs highlights the advantages of our method compared with state-of-the-art approaches.
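The comparison of a prediction's latent code against a bank of latents from valid segmentations can be sketched as a similarity lookup. This is a rough illustration of the general idea under stated assumptions, not CRISP's actual formulation; `uncertainty_score`, the cosine similarity, and the temperature are all hypothetical choices.

```python
import numpy as np

# Rough sketch: score a predicted segmentation's latent code against a
# bank of latent vectors from anatomically valid segmentations. Low
# similarity to everything in the bank signals high uncertainty.

def uncertainty_score(pred_latent, latent_bank, temperature=0.1):
    """Cosine-similarity softmax over the bank; returns 1 - max weight."""
    bank = latent_bank / np.linalg.norm(latent_bank, axis=1, keepdims=True)
    q = pred_latent / np.linalg.norm(pred_latent)
    sims = bank @ q                      # cosine similarity to each entry
    w = np.exp(sims / temperature)
    w /= w.sum()                         # softmax over the bank
    return 1.0 - w.max()

bank = np.eye(4)                         # 4 toy 'valid segmentation' latents
close = uncertainty_score(np.array([1.0, 0.0, 0.0, 0.0]), bank)
far = uncertainty_score(np.ones(4) / 2.0, bank)
```

A prediction matching a bank entry scores low uncertainty, while one equidistant from all entries scores high; producing a full per-pixel map would require propagating such scores back to image space.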